
    Group-wise 3D registration based templates to study the evolution of ant worker neuroanatomy

    The evolutionary success of ants and other social insects is considered to be intrinsically linked to division of labor and emergent collective intelligence. The role of the brains of individual ants in generating these processes, however, is poorly understood. One genus of special interest is Pheidole, which includes more than a thousand species, most of which are dimorphic, i.e., their colonies contain two subcastes of workers: minors and majors. Using confocal imaging and manual annotations, it has been demonstrated that minor and major workers of different ages of three Pheidole species have distinct patterns of brain size and subregion scaling. However, these studies require laborious effort to quantify brain region volumes and are subject to potential bias. To address these issues, we propose a group-wise 3D registration approach to build, for the first time, bias-free brain atlases of intra- and inter-subcaste individuals and to automate the segmentation of new individuals. Comment: 10 pages, 5 figures, preprint for a conference (not reviewed).
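
    As a rough illustration of the group-wise idea, the sketch below alternates between registering every brain image to a current template and averaging the aligned images, so that the resulting template is not anchored to any single individual. It uses SimpleITK with an affine transform; the metric, optimizer settings, and number of rounds are illustrative assumptions, not the authors' pipeline (which would typically add a deformable stage).

```python
# Minimal sketch of group-wise template construction, assuming affine
# registration with SimpleITK; all settings here are illustrative.
import SimpleITK as sitk

def register_to_template(moving, template):
    """Affinely align one brain image to the current template."""
    reg = sitk.ImageRegistrationMethod()
    reg.SetMetricAsMattesMutualInformation(50)
    reg.SetOptimizerAsRegularStepGradientDescent(1.0, 1e-4, 200)
    reg.SetInitialTransform(sitk.CenteredTransformInitializer(
        template, moving, sitk.AffineTransform(3)))
    reg.SetInterpolator(sitk.sitkLinear)
    transform = reg.Execute(template, moving)
    return sitk.Resample(moving, template, transform, sitk.sitkLinear, 0.0)

def build_groupwise_template(paths, n_rounds=5):
    """Alternate registration to the template and voxel-wise averaging."""
    images = [sitk.ReadImage(p, sitk.sitkFloat32) for p in paths]
    template = images[0]  # seed only; its influence fades over the rounds
    for _ in range(n_rounds):
        aligned = [register_to_template(img, template) for img in images]
        acc = aligned[0]
        for img in aligned[1:]:
            acc = sitk.Add(acc, img)
        template = sitk.Divide(acc, float(len(aligned)))
    return template
```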

    A Statistically Representative Atlas for Mapping Neuronal Circuits in the Drosophila Adult Brain

    Published: 23 March 2018. Supplementary material (two figures, one table, and one movie showing the group-wise inter-sex atlas and registered GAL4-GFP patterns) is available online at: https://www.frontiersin.org/articles/10.3389/fninf.2018.00013/full#supplementary-material

    Imaging the expression patterns of reporter constructs is a powerful tool to dissect the neuronal circuits of perception and behavior in the adult brain of Drosophila, one of the major models for studying brain functions. To date, several Drosophila brain templates and digital atlases have been built to automatically analyze and compare collections of expression pattern images. However, there has been no systematic comparison of performance between alternative atlasing strategies and registration algorithms. Here, we objectively evaluated the performance of different strategies for building adult Drosophila brain templates and atlases. In addition, we used state-of-the-art registration algorithms to generate a new group-wise inter-sex atlas. Our results highlight the benefit of statistical atlases over individual ones and show that the newly proposed inter-sex atlas outperformed existing solutions for automated registration and annotation of expression patterns. Over 3,000 images from the Janelia Farm FlyLight collection were registered using the proposed strategy. These registered expression patterns can be searched and compared with a new version of the BrainBaseWeb system and BrainGazer software. We illustrate the validity of our methodology and brain atlas with registration-based predictions of expression patterns in a subset of clock neurons. The described registration framework should benefit brain studies in Drosophila and other insect species.

    IA-C, TM, NM, FS, and AJ were funded by the Tefor Infrastructure under the Investments for the Future program of the French National Research Agency (Grant #ANR-11-INBS-0014). FR was supported by INSERM. Work at Institut des Neurosciences Paris-Saclay was supported by ANR Infrastructure Tefor and by ANR ClockEye (#ANR-14-CE13-0034-01). JI was supported by the Spanish Ministry of Economy and Competitiveness (TEC2014-51882-P), the European Union's Horizon 2020 research and innovation programme (Marie Sklodowska-Curie grant 654911, project THALAMODEL), and the European Research Council (ERC Starting Grant no. 677697, BUNGEE-TOOLS). VRVis (KB, FS) is funded by BMVIT, BMWFW, Styria, SFG, and the Vienna Business Agency in the scope of COMET - Competence Centers for Excellent Technologies (854174), which is managed by FFG. The Institut Jean-Pierre Bourgin benefits from the support of the LabEx Saclay Plant Sciences-SPS (#ANR-10-LABX-0040-SPS).
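
    Once every GAL4-GFP pattern has been registered to the common atlas, the 3D space query described above reduces to scoring voxel overlap between a search pattern and each registered volume. The sketch below is a minimal version of that idea; the threshold and the coverage-style score are assumptions, not the BrainBaseWeb/BrainGazer implementation.

```python
# Minimal sketch of ranking registered expression patterns by overlap
# with a search pattern; threshold and score definition are assumptions.
import numpy as np

def overlap_score(search_pattern, gfp_pattern, threshold=0.5):
    """Fraction of the search pattern's voxels covered by a registered GFP volume."""
    search_mask = search_pattern > threshold
    gfp_mask = gfp_pattern > threshold
    covered = np.logical_and(search_mask, gfp_mask).sum()
    return covered / max(search_mask.sum(), 1)

# lines = {"line_A": vol_a, "line_B": vol_b}   # registered GAL4-GFP volumes
# ranked = sorted(lines, key=lambda k: overlap_score(pdf_profile, lines[k]),
#                 reverse=True)                # highest overlap first
```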

    Blood Cell Revolution: Unveiling 11 Distinct Types with ‘Naturalize’ Augmentation

    Artificial intelligence (AI) has emerged as a cutting-edge tool, simultaneously accelerating, securing, and enhancing the diagnosis and treatment of patients. One example of this capability is the analysis of peripheral blood smears (PBS). In university medical centers, hematologists routinely examine hundreds of PBS slides daily to validate or correct outcomes produced by advanced hematology analyzers assessing samples from potentially problematic patients. This heavy workload can lead to erroneous peripheral blood cell (PBC) readings, posing risks to patient health. AI functions as a transformative tool, significantly improving the accuracy and precision of readings and diagnoses. This study reshapes the parameters of blood cell classification, harnessing the capabilities of AI and broadening the scope from 5 to 11 specific blood cell categories with the challenging 11-class PBC dataset. This transformation facilitates a more profound exploration of blood cell diversity, surpassing prior constraints in medical image analysis. Our approach combines state-of-the-art deep learning techniques, including pre-trained ConvNets, ViTb16 models, and custom CNN architectures. We employ transfer learning, fine-tuning, and ensemble strategies, such as CBAM and averaging ensembles, to achieve unprecedented accuracy and interpretability. Our fully fine-tuned EfficientNetV2 B0 model sets a new standard, with a macro-average precision, recall, and F1-score of 91%, 90%, and 90%, respectively, and an average accuracy of 93%. This breakthrough underscores the transformative potential of 11-class blood cell classification for more precise medical diagnoses. Moreover, our “Naturalize” augmentation technique produces remarkable results: the 2K-PBC dataset generated with “Naturalize” reaches a macro-average precision, recall, and F1-score of 97%, along with an average accuracy of 96%, when leveraging the fully fine-tuned EfficientNetV2 B0 model. This innovation not only elevates classification performance but also addresses data scarcity and bias in medical deep learning. Our research marks a paradigm shift in blood cell classification, enabling more nuanced and insightful medical analyses. The “Naturalize” technique’s impact extends beyond blood cell classification, emphasizing the vital role of diverse and comprehensive datasets in advancing healthcare applications through deep learning.

    This work is supported by grant PID2021-126701OB-I00 funded by MCIN/AEI/10.13039/501100011033 and by “ERDF A way of making Europe”, and by grant GIU19/027 funded by the University of the Basque Country UPV/EHU.
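
    For context, full fine-tuning of EfficientNetV2 B0 for an 11-class problem can be expressed in a few lines of Keras. The input size, dropout rate, optimizer, and learning rate below are illustrative assumptions, not the paper's exact configuration.

```python
# Minimal sketch of fully fine-tuning EfficientNetV2 B0 for 11 blood cell
# classes; hyperparameters here are assumptions for illustration.
import tensorflow as tf

base = tf.keras.applications.EfficientNetV2B0(
    include_top=False, weights="imagenet", input_shape=(224, 224, 3))
base.trainable = True  # "fully fine-tuned": all backbone weights are updated

inputs = tf.keras.Input(shape=(224, 224, 3))
x = base(inputs, training=True)
x = tf.keras.layers.GlobalAveragePooling2D()(x)
x = tf.keras.layers.Dropout(0.3)(x)
outputs = tf.keras.layers.Dense(11, activation="softmax")(x)  # 11 PBC classes

model = tf.keras.Model(inputs, outputs)
model.compile(optimizer=tf.keras.optimizers.Adam(1e-4),
              loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])
# model.fit(train_ds, validation_data=val_ds, epochs=30)
```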

    White Blood Cell Classification: Convolutional Neural Network (CNN) and Vision Transformer (ViT) under Medical Microscope

    Deep learning (DL) has made significant advances in computer vision with the advent of vision transformers (ViTs). Unlike convolutional neural networks (CNNs), ViTs use self-attention to extract both local and global features from image data, and then apply residual connections to feed these features directly into a fully connected multilayer perceptron head. In hospitals, hematologists prepare peripheral blood smears (PBSs) and read them under a medical microscope to detect abnormalities in blood counts, such as leukemia. However, this task is time-consuming and prone to human error. This study investigated the transfer learning process of the Google ViT and ImageNet CNNs to automate the reading of PBSs. The study used two online PBS datasets, PBC and BCCD, and transformed them into balanced datasets to investigate the influence of data amount and noise immunity on both neural networks. The PBC results showed that the Google ViT is an excellent DL solution for data scarcity. The BCCD results showed that the Google ViT is superior to ImageNet CNNs in dealing with unclean, noisy image data because it is able to extract both global and local features and use residual connections, despite the additional time and computational overhead.

    This work is supported by grant PID2021-126701OB-I00 funded by MCIN/AEI/10.13039/501100011033 and by “ERDF A way of making Europe”, and by grant GIU19/027 funded by the University of the Basque Country UPV/EHU.
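
    A minimal sketch of this kind of ViT transfer learning, using the public google/vit-base-patch16-224-in21k checkpoint from Hugging Face Transformers; the checkpoint choice and label count are assumptions, since the abstract only specifies "the Google ViT".

```python
# Minimal sketch of transferring a pre-trained Google ViT to blood cell
# classification; checkpoint and num_labels are illustrative assumptions.
from transformers import ViTForImageClassification, ViTImageProcessor

model = ViTForImageClassification.from_pretrained(
    "google/vit-base-patch16-224-in21k",
    num_labels=8,                      # e.g. the PBC dataset's cell classes
    ignore_mismatched_sizes=True)      # replace the pre-trained head
processor = ViTImageProcessor.from_pretrained(
    "google/vit-base-patch16-224-in21k")

# inputs = processor(images=pil_image, return_tensors="pt")
# logits = model(**inputs).logits     # one score per blood cell class
```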

    Stable deep neural network architectures for mitochondria segmentation on electron microscopy volumes

    Electron microscopy (EM) allows the identification of intracellular organelles such as mitochondria, providing insights for clinical and scientific studies. In recent years, a number of novel deep learning architectures have been published reporting superior performance, or even human-level accuracy, compared to previous approaches on public mitochondria segmentation datasets. Unfortunately, many of these publications make neither the code nor the full training details public to support the results obtained, leading to reproducibility issues and dubious model comparisons. For that reason, and following a recent code of best practices for reporting experimental results, we present an extensive study of the state-of-the-art deep learning architectures for the segmentation of mitochondria in EM volumes, and evaluate the impact on performance of different variations of 2D and 3D U-Net-like models for this task. To better understand the contribution of each component, a common set of pre- and post-processing operations has been implemented and tested with each approach. Moreover, an exhaustive sweep of hyperparameter values for all architectures has been performed, and each configuration has been run multiple times to report the mean and standard deviation of the evaluation metrics. Using this methodology, we found very stable architectures and hyperparameter configurations that consistently obtain state-of-the-art results on the well-known EPFL Hippocampus mitochondria segmentation dataset. Furthermore, we have benchmarked our proposed models on two other available datasets, Lucchi++ and Kasthuri++, where they outperform all previous works. The code derived from this research and its documentation are publicly available.
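
    The multiple-runs protocol is the part most worth copying. A minimal sketch, assuming a train_and_evaluate function (a placeholder for the actual training pipeline) that returns a single metric such as foreground IoU:

```python
# Minimal sketch of reporting mean and standard deviation over repeated
# runs; train_and_evaluate is a hypothetical stand-in for the real pipeline.
import numpy as np

def stability_report(train_and_evaluate, config, n_runs=5):
    scores = []
    for seed in range(n_runs):
        np.random.seed(seed)  # re-seed so each run is an independent draw
        scores.append(train_and_evaluate(config, seed=seed))
    scores = np.asarray(scores)
    return scores.mean(), scores.std()

# mean_iou, std_iou = stability_report(train_and_evaluate,
#                                      {"model": "unet_3d"})
# print(f"IoU: {mean_iou:.3f} +/- {std_iou:.3f}")
```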

    Egocentric Vision-based Action Recognition: A survey

    The egocentric action recognition (EAR) field has recently grown in popularity due to the affordable and lightweight wearable cameras available nowadays, such as the GoPro and similar devices. The amount of egocentric data generated has therefore increased, triggering interest in the understanding of egocentric videos. More specifically, the recognition of actions in egocentric videos has gained popularity due to the challenge that it poses: the wild movement of the camera and the lack of context make it hard to recognise actions with a performance similar to that of third-person vision solutions. This has ignited research interest in the field and, nowadays, many public datasets and competitions can be found in both the machine learning and the computer vision communities. In this survey, we aim to analyse the literature on egocentric vision methods and algorithms. To that end, we propose a taxonomy that divides the literature into various categories with subcategories, contributing a more fine-grained classification of the available methods. We also provide a review of the zero-shot approaches used by the EAR community, a methodology that could help to transfer EAR algorithms to real-world applications. Finally, we summarise the datasets used by researchers in the literature.

    We gratefully acknowledge the support of the Basque Government's Department of Education for the predoctoral funding of the first author. This work has been supported by the Spanish Government under the FuturAAL-Context project (RTI2018-101045-B-C21) and by the Basque Government under the Deustek project (IT-1078-16-D).

    3D Object Detection From LiDAR Data Using Distance Dependent Feature Extraction

    This paper presents a new approach to 3D object detection that leverages the properties of the data obtained by a LiDAR sensor. State-of-the-art detectors use neural network architectures based on assumptions valid for camera images. However, point clouds obtained from LiDAR are fundamentally different. Most detectors use shared filter kernels to extract features, which does not take into account the range-dependent nature of point cloud features. To show this, different detectors are trained on two splits of the KITTI dataset: close range (objects up to 25 meters from the LiDAR) and long range. Top-view images are generated from the point clouds as input for the networks. The combined results outperform a baseline network trained on the full dataset with a single backbone. Additional experiments compare the effect of using different input features when converting the point cloud to an image. The results indicate that the network focuses on the shape and structure of the objects, rather than the exact values of the input. This work proposes an improvement for 3D object detectors by taking into account the properties of LiDAR point clouds over distance. Results show that training separate networks for close-range and long-range objects boosts performance for all KITTI benchmark difficulties. Comment: 10 pages, 8 figures, 6th International Conference on Vehicle Technology and Intelligent Transport Systems (VEHITS 2020).
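
    Two of the core steps lend themselves to a short sketch: splitting a point cloud at the 25 m range boundary and rasterizing each split into a top-view image. The grid extents, resolution, and maximum-height encoding below are common choices assumed for illustration, not necessarily the paper's exact settings.

```python
# Minimal sketch of range splitting and top-view rasterization of a
# LiDAR point cloud; grid parameters are illustrative assumptions.
import numpy as np

def split_by_range(points, boundary=25.0):
    """points: (N, 3) array of x, y, z in the LiDAR frame (x forward)."""
    dist = np.linalg.norm(points[:, :2], axis=1)  # planar distance to sensor
    return points[dist <= boundary], points[dist > boundary]

def to_top_view(points, x_max=80.0, y_half=40.0, res=0.1):
    """Rasterize points into a height map seen from above."""
    h, w = int(2 * y_half / res), int(x_max / res)
    img = np.zeros((h, w), dtype=np.float32)
    cols = (points[:, 0] / res).astype(int)
    rows = ((points[:, 1] + y_half) / res).astype(int)
    valid = (rows >= 0) & (rows < h) & (cols >= 0) & (cols < w)
    # keep the maximum height per cell, a common top-view encoding
    np.maximum.at(img, (rows[valid], cols[valid]), points[valid, 2])
    return img

# close_pts, far_pts = split_by_range(lidar_points)
# close_bev = to_top_view(close_pts)  # fed to the close-range detector
```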

    A generic classification-based method for segmentation of nuclei in 3D images of early embryos

    BACKGROUND: Studying how individual cells spatially and temporally organize within the embryo is a fundamental issue in modern developmental biology, key to better understanding the first stages of embryogenesis. In order to perform high-throughput analyses of three-dimensional microscopy images, it is essential to be able to automatically segment, classify and track cell nuclei. Many 3D/4D segmentation and tracking algorithms have been reported in the literature. Most of them are specific to particular models or acquisition systems and often require the fine-tuning of parameters. RESULTS: We present a new automatic algorithm to segment and simultaneously classify cell nuclei in 3D/4D images. Segmentation relies on training samples that are interactively provided by the user and on an iterative thresholding process. This algorithm can correctly segment nuclei even when they are touching, and remains effective under temporal and spatial intensity variations. The segmentation is coupled to a classification of nuclei according to cell cycle phases, allowing biologists to quantify the effect of genetic perturbations and drug treatments. Robust 3D geometrical shape descriptors are used as training features for classification. Segmentation and classification results on three complete datasets are presented. In our working dataset of the Caenorhabditis elegans embryo, only 21 nuclei out of 3,585 were not detected, the overall F-score for segmentation reached 0.99, and more than 95% of the nuclei were classified in the correct cell cycle phase. No merging of nuclei was found. CONCLUSION: We developed a novel generic algorithm for segmentation and classification in 3D images. The method, referred to as the Adaptive Generic Iterative Thresholding Algorithm (AGITA), is freely available as an ImageJ plug-in.
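
    The iterative thresholding at the heart of the method can be sketched roughly as follows: sweep the threshold from strict to permissive and accept a connected component once it matches nucleus-like shape statistics and does not overlap an already accepted object. This is a simplified reading of the abstract, not the published AGITA algorithm; the volume bounds are placeholders standing in for the shape descriptors learned from the user's training samples.

```python
# Simplified sketch of iterative-thresholding segmentation of nuclei;
# acceptance criteria here are illustrative placeholders.
import numpy as np
from scipy import ndimage

def iterative_threshold_segment(volume, levels, min_vox=200, max_vox=20000):
    segmented = np.zeros(volume.shape, dtype=np.int32)
    next_label = 1
    for t in sorted(levels, reverse=True):        # from strict to permissive
        labels, n = ndimage.label(volume > t)
        for i in range(1, n + 1):
            mask = labels == i
            # accept an object only once it looks like a nucleus and
            # does not overlap something already accepted; touching nuclei
            # merged at low thresholds are rejected by the size bound
            if min_vox <= mask.sum() <= max_vox and not segmented[mask].any():
                segmented[mask] = next_label
                next_label += 1
    return segmented

# nuclei = iterative_threshold_segment(img3d, levels=np.linspace(0.2, 0.8, 13))
```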

    Identification and measurement of tropical tuna species in purse seiner catches using computer vision and deep learning

    Fishery monitoring programs are essential for the effective management of marine resources, as they provide scientists and managers with the data needed both for the preparation of scientific advice and for fisheries control and surveillance. Monitoring is generally done by human observers, both in port and onboard, at a high cost. Consequently, some Regional Fisheries Management Organizations (RFMO) are opting for electronic monitoring (EM) as an alternative or complement to human observers in certain fisheries. This is the case of the tropical tuna purse seine fishery operating in the Indian and Atlantic oceans, which started an EM program on a voluntary basis in 2017. However, even when the monitoring is conducted through EM, the image analysis is a tedious task performed manually by experts. In this paper, we propose a cost-effective methodology for the automatic processing of the images already being collected by cameras onboard tropical tuna purse seiners. First, the images are preprocessed to homogenize them across all vessels and facilitate subsequent steps. Second, the fish are individually segmented using a deep neural network (Mask R-CNN). Then, all segments are passed through another deep neural network (ResNet50V2) to classify them by species and estimate their size distribution. For the classification of fish, we achieved an accuracy of over 70% for all species, i.e., about 3 out of 4 individuals are correctly classified to their corresponding species. The size distribution estimates are aligned with official port measurements but are calculated using a larger number of individuals. Finally, we also propose improvements to the current image capture systems which can facilitate the work of the proposed automation methodology.

    This project is funded by the Basque Government and the Spanish fisheries ministry through the EU Next Generation funds. Jose A. Fernandes' work has received funding from the European Union's Horizon 2020 research and innovation programme under grant agreement No 869342 (SusTunTech). This work is supported in part by the University of the Basque Country UPV/EHU grant GIU19/027. We want to thank the expert analysts who helped to annotate images with incredible effort: Manuel Santos and Inigo Krug. We would also like to extend our gratitude to Marine Instruments for providing the necessary equipment to collect the data. This paper is contribution no. 1080 from AZTI, Marine Research, Basque Research and Technology Alliance (BRTA).
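
    The second stage of the pipeline, classifying each Mask R-CNN segment with ResNet50V2, might look like the sketch below. The three-species label set, input size, and crop-and-resize handling are assumptions for illustration; the detection stage is assumed to supply bounding boxes, and in practice the classifier would be trained on annotated catches rather than left with random weights.

```python
# Minimal sketch of the second-stage species classifier; labels, input
# size, and the crop handling are illustrative assumptions.
import numpy as np
import tensorflow as tf

# e.g. skipjack, yellowfin, bigeye; untrained here for brevity
model = tf.keras.applications.ResNet50V2(weights=None, classes=3)

def classify_segments(frame, boxes):
    """frame: HxWx3 image; boxes: list of (y0, x0, y1, x1) from Mask R-CNN."""
    crops = []
    for y0, x0, y1, x1 in boxes:
        crop = tf.image.resize(frame[y0:y1, x0:x1], (224, 224))
        crops.append(tf.keras.applications.resnet_v2.preprocess_input(crop))
    probs = model.predict(np.stack(crops), verbose=0)
    return probs.argmax(axis=1)  # one species label per segment
```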